5,511 research outputs found

    Killing Two Birds with One Stone: Quantization Achieves Privacy in Distributed Learning

    Communication efficiency and privacy protection are two critical issues in distributed machine learning. Existing methods tackle these two issues separately and may have high implementation complexity, which constrains their application in resource-limited environments. We propose a comprehensive quantization-based solution that simultaneously achieves communication efficiency and privacy protection, providing new insight into the correlated nature of communication and privacy. Specifically, we demonstrate the effectiveness of the proposed solution in the distributed stochastic gradient descent (SGD) framework by adding binomial noise to the uniformly quantized gradients to reach the desired differential-privacy level, at a minor cost in communication efficiency. We theoretically characterize the resulting trade-offs between communication, privacy, and learning performance.
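The mechanism described in the abstract can be illustrated with a minimal sketch: uniformly quantize each gradient to a small number of discrete levels (saving communication), then add discrete zero-mean binomial noise to the quantized levels (providing the privacy noise). The function and parameter names, and the centering of the noise, are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def quantize_with_binomial_noise(grad, num_levels=16, noise_trials=8, rng=None):
    """Sketch of quantized, noise-perturbed gradient transmission.

    Uniform stochastic quantization to `num_levels` levels reduces the
    bits per coordinate; adding Binomial(n, 1/2) - n/2 noise keeps the
    message discrete while injecting zero-mean privacy noise. These
    parameter names are illustrative, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = grad.min(), grad.max()
    step = (hi - lo) / (num_levels - 1)
    # Stochastic uniform quantization to integer levels in [0, num_levels).
    scaled = (grad - lo) / step
    floor = np.floor(scaled)
    levels = floor + (rng.random(grad.shape) < (scaled - floor))
    # Zero-mean binomial noise keeps the transmitted values discrete.
    noise = rng.binomial(noise_trials, 0.5, size=grad.shape) - noise_trials / 2
    # Receiver dequantizes with the transmitted (lo, step) pair.
    return lo + (levels + noise) * step
```

With `noise_trials=0` this reduces to plain stochastic quantization, whose per-coordinate error is bounded by the step size; increasing `noise_trials` trades accuracy (and a little communication) for stronger privacy noise.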

    Effects of food wastes on yellow mealworm Tenebrio molitor larval nutritional profiles and growth performance

    In this study, the nutritional profiles and growth performance of yellow mealworm, Tenebrio molitor, larvae (TML) cultivated on common food wastes, i.e., watermelon rinds, broilers’ eggshells, and banana peels, were assessed. Nutritional profiles and growth performance of TML were evaluated after a 28-day feeding trial. Post-feeding proximate analysis showed a significant increase in nutritional content compared with the control groups: TML demonstrated the highest levels of crude protein (43.38% ± 2.71), moisture (9.74% ± 0.23), and ash (4.40% ± 0.22) in the group fed watermelon wastes. On the other hand, TML showed the highest level of crude fibre (8.73% ± 0.05) when fed broilers’ eggshells, and a higher level of crude fat (40.13% ± 4.66) with banana wastes. Nitrogen-free extract (NFE) content was also higher in the group fed banana wastes (4.46% ± 5.30). In terms of growth performance, TML fed watermelon wastes demonstrated superior specific growth rate (2.50% ± 0.43) and feed conversion efficiency (0.10% ± 0.01). Interestingly, TML grown on banana wastes showed the highest survival rate (97.5%) of all groups. In short, TML cultivation using watermelon and banana wastes showed promising results for nutritional fortification and growth enhancement.

    Telesonar: Robocall Alarm System by Detecting Echo Channel and Breath Timing


    (Sulfasalazinato-κO)bis(triphenylphosphine-κP)copper(I)

    The title mixed-ligand copper(I) complex, [Cu(C18H13N4O5S)(C18H15P)2], was synthesized via solvothermal reaction of [Cu(PPh3)2(MeCN)2]ClO4 and sulfasalazine [systematic name: 2-hydroxy-5-(2-{4-[(2-pyridylamino)sulfonyl]phenyl}diazenyl)benzoic acid]. The mononuclear complex displays a trigonal coordination geometry for the Cu(I) atom, which is surrounded by two P-atom donors from two different PPh3 ligands and one O-atom donor from the monodentate carboxylate group of the sulfasalazinate ligand. The latter ligand is found in a zwitterionic form, with a deprotonated amine N atom and a protonated pyridine N atom. Such a feature was previously described for free sulfasalazine. The crystal structure is stabilized by C—H⋯O, C—H⋯N, N—H⋯N and O—H⋯O hydrogen bonds.

    Analyzing and Mitigating Interference in Neural Architecture Search

    Weight sharing is a popular approach to reduce the cost of neural architecture search (NAS) by reusing the weights of shared operators from previously trained child models. However, the rank correlation between the estimated accuracy and the ground-truth accuracy of those child models is low, due to interference among different child models caused by weight sharing. In this paper, we investigate the interference issue by sampling different child models and calculating the gradient similarity of shared operators, and observe: 1) the interference on a shared operator between two child models is positively correlated with the number of differing operators; 2) the interference is smaller when the inputs and outputs of the shared operator are more similar. Inspired by these two observations, we propose two approaches to mitigate the interference: 1) MAGIC-T: rather than randomly sampling child models for optimization, we propose a gradual modification scheme that changes one operator between adjacent optimization steps to minimize the interference on the shared operators; 2) MAGIC-A: forcing the inputs and outputs of the operator across all child models to be similar to reduce the interference. Experiments on a BERT search space verify that mitigating interference via each of our proposed methods improves the rank correlation of the supernet, and that combining both methods achieves better results. Our discovered architecture outperforms RoBERTa-base by 1.1 and 0.6 points and ELECTRA-base by 1.6 and 1.1 points on the dev and test sets of the GLUE benchmark. Extensive results on BERT compression, reading comprehension, and ImageNet tasks demonstrate the effectiveness and generality of our proposed methods. Comment: ICML 2022, Spotlight
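The MAGIC-T idea described above can be sketched in a few lines: instead of drawing each child architecture independently, derive the next one from the previous by changing exactly one operator, so that adjacent optimization steps perturb the shared weights as little as possible. Modeling an architecture as a flat list of operator names is a simplifying assumption for illustration, not the paper's encoding.

```python
import random

def magic_t_sample(prev_arch, op_choices, rng=None):
    """MAGIC-T-style gradual sampling (sketch, not the authors' code).

    Returns a new child architecture that differs from `prev_arch` in
    exactly one position, chosen uniformly at random, with a new
    operator drawn from `op_choices` (excluding the current one).
    """
    rng = rng or random.Random()
    pos = rng.randrange(len(prev_arch))          # position to modify
    alternatives = [op for op in op_choices if op != prev_arch[pos]]
    next_arch = list(prev_arch)
    next_arch[pos] = rng.choice(alternatives)    # swap in one new operator
    return next_arch
```

Each call yields a neighbor in the search space at Hamming distance one, which is what bounds the per-step interference on shared operators under observation 1) of the abstract.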